Deep Learning-based Prescription of Cardiac MRI Planes.
Purpose: To develop and evaluate a system to prescribe imaging planes for cardiac MRI based on deep learning (DL)-based localization of key anatomic landmarks.

Materials and Methods: Annotated landmarks on 892 long-axis (LAX) and 493 short-axis (SAX) cine steady-state free precession series from cardiac MR images were retrospectively collected between February 2012 and June 2017. U-Net-based heatmap regression was used to localize cardiac landmarks, which were then used to compute cardiac MRI planes. Performance was evaluated by comparing localization distances and plane angle differences between DL predictions and ground truth. The plane angulations from DL were also compared with those prescribed by the technologist at the original time of acquisition. Data were split into 80% for training and 20% for testing, and results were confirmed with fivefold cross-validation.

Results: On LAX images, DL localized the apex within a mean of 12.56 mm ± 19.11 (standard deviation) and the mitral valve (MV) within 7.68 mm ± 6.91. On SAX images, DL localized the aortic valve within 5.78 mm ± 5.68, the MV within 5.90 mm ± 5.24, the pulmonary valve within 6.55 mm ± 6.39, and the tricuspid valve within 6.39 mm ± 5.89. On the basis of these localizations, the average angle bias and mean error of DL-predicted imaging planes relative to ground truth annotations were as follows: SAX, -1.27° ± 6.81 and 4.93° ± 4.86; four-chamber, 0.38° ± 6.45 and 5.16° ± 3.80; three-chamber, 0.13° ± 12.70 and 9.02° ± 8.83; and two-chamber, 0.25° ± 9.08 and 6.53° ± 6.28, respectively.

Conclusion: DL-based anatomic localization is a feasible strategy for planning cardiac MRI planes and can produce imaging planes comparable to those defined by ground truth landmarks. Supplemental material is available for this article. © RSNA, 2019
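Once the landmarks are localized, the plane-prescription step reduces to geometry: three landmark points define a plane, and the angle between a predicted and a ground-truth plane normal gives the kind of angle error reported above. A minimal sketch of that geometry (not the authors' code; the landmark coordinates below are hypothetical):

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D landmark points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def plane_angle_deg(n_pred, n_true):
    """Angle (degrees) between two plane normals, ignoring orientation sign."""
    c = abs(np.dot(n_pred, n_true))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical landmark coordinates in mm: apex, mitral valve, aortic valve
apex = np.array([0.0, 0.0, 0.0])
mv = np.array([80.0, 10.0, 0.0])
av = np.array([70.0, 40.0, 10.0])
n_true = plane_normal(apex, mv, av)
# Perturbed landmarks stand in for DL localization error
n_pred = plane_normal(apex + [2, -1, 1], mv + [1, 2, -1], av)
print(round(plane_angle_deg(n_pred, n_true), 2))
```

Taking the absolute value of the dot product makes the metric insensitive to which side of the plane the normal points toward, which matters because a landmark triplet can define the same plane with either orientation.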
Automated CT and MRI Liver Segmentation and Biometry Using a Generalized Convolutional Neural Network.
Purpose: To assess the feasibility of training a convolutional neural network (CNN) to automate liver segmentation across different imaging modalities and techniques used in clinical practice, and to apply this to automate liver biometry.

Methods: We trained a 2D U-Net CNN for liver segmentation in two stages using 330 abdominal MRI and CT exams acquired at our institution. First, we trained the network with non-contrast multi-echo spoiled gradient-echo (SPGR) images from 300 MRI exams to provide multiple signal weightings. Then, we used transfer learning to generalize the CNN with additional images from 30 contrast-enhanced MRI and CT exams. We assessed the performance of the CNN using a distinct multi-institutional data set curated from multiple sources (n = 498 subjects). Segmentation accuracy was evaluated by computing Dice scores. Using these segmentations, we computed liver volume from CT and T1-weighted (T1w) MRI exams and estimated hepatic proton density fat fraction (PDFF) from multi-echo T2*-weighted (T2*w) MRI exams. We compared quantitative volumetry and PDFF estimates between automated and manual segmentation using Pearson correlation and Bland-Altman statistics.

Results: Dice scores were 0.94 ± 0.06 for CT (n = 230), 0.95 ± 0.03 for T1w MR (n = 100), and 0.92 ± 0.05 for T2*w MR (n = 169). Liver volume measured by manual and automated segmentation agreed closely for CT (95% limits of agreement (LoA) = [-298 mL, 180 mL]) and T1w MR (LoA = [-358 mL, 180 mL]). Hepatic PDFF measured by the two segmentations also agreed closely (LoA = [-0.62%, 0.80%]).

Conclusions: Using a transfer-learning strategy, we demonstrated the feasibility of generalizing a CNN to perform liver segmentation across different imaging techniques and modalities. With further refinement and validation, CNNs may have broad applicability for multimodal liver volumetry and hepatic tissue characterization.
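The Dice score used above to evaluate segmentation accuracy follows directly from its definition, 2·|A∩B| / (|A| + |B|). A minimal sketch; the toy 2D masks are illustrative stand-ins, not study data:

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical 8x8 masks standing in for liver segmentations
a = np.zeros((8, 8), dtype=bool)
a[2:6, 2:6] = True            # 16 "liver" voxels
b = np.zeros((8, 8), dtype=bool)
b[3:7, 2:6] = True            # same size, shifted one row
print(dice(a, b))             # → 0.75 (12 shared voxels of 16 each)
```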
The interactions of alcohol, sex, and stress
Human history is deeply intertwined with alcohol consumption. While alcohol use disorders (AUD) are often considered on an individual level, they represent a societal problem, with increasing evidence for a dichotomy between men and women in their sequelae. Stress is known to impact all aspects of the addiction cycle, and while much work has focused on acute ethanol use or withdrawal, many questions remain about the transition to dependence and variation between the sexes. This study sought to advance our understanding of the changes occurring in the context of chronic ethanol exposure, an area of investigation poised to significantly impact treatment paradigms. In chapter 2, preclinical studies were performed to elucidate the activation changes occurring in the stress-responsive central nucleus of the amygdala (CeA) during chronic ethanol exposure on both long-term and short-term scales, and to examine the effect of this chronic ethanol use on the stress response. Next, in chapter 3, anatomical approaches were used to link two major monoaminergic nuclei, the locus coeruleus (LC) and the dorsal raphe nucleus (DRN), by virtue of coordinated projections from the limbic stress nucleus, the CeA. The phenotype of these collateralized neurons was then identified as containing the key stress peptides corticotropin-releasing factor (CRF) or dynorphin (DYN). Finally, in chapter 4, a molecular marker of the stress response, the CRF receptor (CRFr), was examined in the LC using immunoelectron microscopy and found to be dysregulated in a dichotomous fashion, potentially underlying some of the stress vulnerability seen in AUD. This study offers both molecular and circuitry targets that may be considered in future treatment paradigms and highlights the importance of individualized treatment strategies for maximal patient benefit.
Clinical Performance and Role of Expert Supervision of Deep Learning for Cardiac Ventricular Volumetry: A Validation Study
Purpose: To evaluate the performance of a deep learning (DL) algorithm for clinical measurement of right and left ventricular volume and function across cardiac MR images obtained for a range of clinical indications and pathologies.

Materials and Methods: A retrospective, Health Insurance Portability and Accountability Act-compliant study was conducted using the first 200 noncongenital clinical cardiac MRI examinations from June 2015 to June 2017 for which volumetry was available. Images were analyzed using commercially available software for automated DL-based and manual contouring of biventricular volumes. Fully automated measurements were compared using Pearson correlations, relative volume errors, and Bland-Altman analyses. Manual, automated, and expert-revised contours for 50 MR images were examined by comparing regional Dice coefficients at the base, midventricle, and apex to further analyze contour quality.

Results: Fully automated and manual left ventricular volumes were strongly correlated for end-systolic volume (ESV: Pearson r = 0.99, P < .001), end-diastolic volume (EDV: r = 0.97, P < .001), and ejection fraction (EF: r = 0.94, P < .001). Right ventricular measurements were also correlated for ESV (r = 0.93, P < .001), EDV (r = 0.92, P < .001), and EF (r = 0.73, P < .001). Visual inspection of segmentation quality showed that most errors (73%) occurred at the cardiac base. Mean Dice coefficients between manual, automated, and expert-revised contours ranged from 0.92 to 0.95, with the greatest variance at the base and apex.

Conclusion: Fully automated ventricular segmentation by the tested algorithm provides contours and ventricular volumes that could aid expert segmentation but benefits from expert supervision, particularly to resolve errors at the basal and apical slices. Supplemental material is available for this article. © RSNA, 2020
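The Bland-Altman analysis used above to compare automated and manual volumes reports the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 × SD of the paired differences. A minimal sketch with hypothetical paired EDV values (not study data):

```python
import numpy as np

def bland_altman(manual, auto):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(auto, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired end-diastolic volumes (mL): manual vs automated
manual = [150, 120, 180, 95, 160, 140]
auto = [155, 118, 185, 90, 158, 150]
bias, (lo, hi) = bland_altman(manual, auto)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

Unlike a correlation coefficient, the limits of agreement are in the units of the measurement itself (here, mL), which is why the abstract can report them as a clinically interpretable volume range.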
CNN-based Deformable Registration Facilitates Fast and Accurate Air Trapping Measurements at Inspiratory and Expiratory CT
Purpose: To develop a convolutional neural network (CNN)-based deformable lung registration algorithm to reduce computation time and to assess its potential for lobar air trapping quantification.

Materials and Methods: In this retrospective study, a CNN algorithm was developed to perform deformable registration of lung CT (LungReg) using data from 9118 patients in the COPDGene Study (data collected between 2007 and 2012). Loss function constraints included cross-correlation, displacement field regularization, lobar segmentation overlap, and the Jacobian determinant. LungReg was compared with a standard diffeomorphic registration (SyN) for lobar Dice overlap, percentage of voxels with nonpositive Jacobian determinants, and inference runtime using paired t tests. Landmark colocalization error (LCE) across 10 patients was compared using a random effects model. Agreement between LungReg and SyN air trapping measurements was assessed using the intraclass correlation coefficient. The ability of LungReg versus SyN emphysema and air trapping measurements to predict Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage was compared using areas under the receiver operating characteristic curve.

Results: Average performance of LungReg versus SyN showed lobar Dice overlap scores of 0.91-0.97 versus 0.89-0.95, respectively (P < .001); percentage of voxels with nonpositive Jacobian determinant of 0.04 versus 0.10 (P < .001); inference runtime of 0.99 second (graphics processing unit) and 2.27 seconds (central processing unit) versus 418.46 seconds (central processing unit) (P < .001); and LCE of 7.21 mm versus 6.93 mm (P < .001). LungReg and SyN whole-lung and lobar air trapping measurements achieved excellent agreement (intraclass correlation coefficients > 0.98). LungReg versus SyN areas under the receiver operating characteristic curve for predicting GOLD stage were not statistically different (range, 0.88-0.95 vs 0.88-0.95; P = .31-.95).

Conclusion: CNN-based deformable lung registration is accurate and fully automated, with a runtime feasible for clinical lobar air trapping quantification, and has the potential to improve diagnosis of small airway disease.

Keywords: Air Trapping, Convolutional Neural Network, Deformable Registration, Small Airway Disease, CT, Lung, Semisupervised Learning, Unsupervised Learning. Supplemental material is available for this article. © RSNA, 2021 An earlier incorrect version of this article appeared online. This article was corrected on December 22, 2021.
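The "percentage of voxels with nonpositive Jacobian determinant" metric reported above flags folding in a deformation x ↦ x + u(x): wherever the determinant of the deformation's Jacobian is ≤ 0, the mapping locally inverts and is no longer physically plausible. A minimal 2D sketch using central differences (illustrative only; the study's displacement fields are 3D and the exact discretization is not specified):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Jacobian determinant of the deformation x + u(x) for a 2D
    displacement field disp of shape (H, W, 2): channel 0 is the
    x (column) displacement, channel 1 the y (row) displacement."""
    du_dy, du_dx = np.gradient(disp[..., 0])  # gradients along rows, cols
    dv_dy, dv_dx = np.gradient(disp[..., 1])
    # Deformation Jacobian = identity + displacement gradient
    return (1 + du_dx) * (1 + dv_dy) - du_dy * dv_dx

# Hypothetical smooth, small-amplitude field: no folding expected
H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W].astype(float)
disp = np.stack([0.05 * np.sin(xx / 5), 0.05 * np.cos(yy / 5)], axis=-1)
jac = jacobian_determinant_2d(disp)
pct_folded = 100.0 * (jac <= 0).mean()  # the folding percentage, as in the abstract
print(pct_folded)
```

Penalizing this determinant in the loss, as the abstract describes, pushes the network toward deformations that preserve orientation everywhere.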
Machine Learning and Deep Neural Networks in Thoracic and Cardiovascular Imaging.
Advances in technology have always had the potential and opportunity to shape the practice of medicine, and in no medical specialty has technology been more rapidly embraced and adopted than radiology. Machine learning and deep neural networks promise to transform the practice of medicine, and, in particular, the practice of diagnostic radiology. These technologies are evolving at a rapid pace due to innovations in computational hardware and novel neural network architectures. Several cutting-edge postprocessing analysis applications are actively being developed in the fields of thoracic and cardiovascular imaging, including applications for lesion detection and characterization, lung parenchymal characterization, coronary artery assessment, cardiac volumetry and function, and anatomic localization. Cardiothoracic and cardiovascular imaging lies at the technological forefront of radiology due to a confluence of technical advances. Enhanced equipment has enabled computed tomography and magnetic resonance imaging scanners that can safely capture images that freeze the motion of the heart to exquisitely delineate fine anatomic structures. Computing hardware developments have enabled an explosion in computational capabilities and in data storage. Progress in software and fluid mechanical models is enabling complex 3D and 4D reconstructions to not only visualize and assess the dynamic motion of the heart, but also quantify its blood flow and hemodynamics. And now, innovations in machine learning, particularly in the form of deep neural networks, are enabling us to leverage the increasingly massive data repositories that are prevalent in the field. Here, we discuss developments in machine learning techniques and deep neural networks to highlight their likely role in future radiologic practice, both in and outside of image interpretation and analysis. We discuss the concepts of validation, generalizability, and clinical utility as they pertain to this and other new technologies, and we reflect upon the opportunities and challenges of bringing these into daily use.
Mammographic Breast Density Model Using Semi-Supervised Learning Reduces Inter-/Intra-Reader Variability
Breast density is an important risk factor for breast cancer development; however, inconsistency in density reporting among imagers can lead to patient and clinician confusion. A deep learning (DL) model for mammographic density grading was examined in a retrospective multi-reader, multi-case study consisting of 928 image pairs and assessed for its impact on inter- and intra-reader variability and reading time. Seven readers assigned density categories to the images, then re-read the test set aided by the model after a 4-week washout. To measure intra-reader agreement, 100 image pairs were blindly double-read in both sessions. Linearly weighted Cohen's kappa (κ) and Student's t-test were used to assess model and reader performance. The model achieved a κ of 0.87 (95% CI: 0.84, 0.89) for four-class density assessment and a κ of 0.91 (95% CI: 0.88, 0.93) for binary non-dense/dense assessment. Superiority tests showed a significant reduction in inter-reader variability (κ improved from 0.70 to 0.88, p ≤ 0.001) and intra-reader variability (κ improved from 0.83 to 0.95, p ≤ 0.01) for four-class density assessment, and a significant reduction in inter-reader variability (κ improved from 0.77 to 0.96, p ≤ 0.001) and intra-reader variability (κ improved from 0.89 to 0.97, p ≤ 0.01) for binary non-dense/dense assessment when readers were aided by DL. Mean reading time per image pair also decreased by 30% (0.86 s; 95% CI: 0.01, 1.71), with six of seven readers showing reading time reductions.
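The linearly weighted kappa used in this study penalizes disagreements between ordinal density categories in proportion to their distance, so calling an A-density breast D costs more than calling it B. A minimal sketch from the standard weighted-kappa definition (not the study's code; the reads below are hypothetical, with labels 0-3 standing in for BI-RADS categories A-D):

```python
import numpy as np

def linear_weighted_kappa(a, b, n_classes=4):
    """Linearly weighted Cohen's kappa for two raters on ordinal labels 0..n-1."""
    a, b = np.asarray(a), np.asarray(b)
    conf = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        conf[i, j] += 1          # observed joint distribution
    conf /= conf.sum()
    # Linear disagreement weights: |i - j| / (n - 1)
    idx = np.arange(n_classes)
    w = np.abs(np.subtract.outer(idx, idx)) / (n_classes - 1)
    # Chance-expected joint distribution from the marginals
    expected = np.outer(conf.sum(1), conf.sum(0))
    return 1 - (w * conf).sum() / (w * expected).sum()

# Hypothetical density reads from two sessions of one reader
r1 = [0, 1, 1, 2, 3, 2, 1, 0, 3, 2]
r2 = [0, 1, 2, 2, 3, 2, 1, 1, 3, 2]
print(round(linear_weighted_kappa(r1, r2), 3))
```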
Reader Perceptions and Impact of AI on CT Assessment of Air Trapping
Quantitative imaging measurements can be facilitated by artificial intelligence (AI) algorithms, but how they might impact decision-making and how they are perceived by radiologists remain uncertain. After creation of a dedicated inspiratory-expiratory CT examination and concurrent deployment of a quantitative AI algorithm for assessing air trapping, five cardiothoracic radiologists retrospectively evaluated the severity of air trapping on 17 examinations. Air trapping severity of each lobe was evaluated in three stages: qualitatively (visually); semiquantitatively, allowing manual region-of-interest measurements; and quantitatively, using results from an AI algorithm. Readers were surveyed on each case about their perceptions of the AI algorithm. The algorithm improved interreader agreement (intraclass correlation coefficients: visual, 0.28; semiquantitative, 0.40; quantitative, 0.84; P < .001) and improved correlation with pulmonary function testing (forced expiratory volume in 1 second-to-forced vital capacity ratio) (visual, r = -0.26; semiquantitative, r = -0.32; quantitative, r = -0.44). Readers perceived moderate agreement with the AI algorithm (Likert scale average, 3.7 of 5), a mild impact on their final assessment (average, 2.6), and a neutral perception of overall utility (average, 3.5). Although the AI algorithm objectively improved interreader consistency and correlation with pulmonary function testing, individual readers did not immediately perceive this benefit, revealing a potential barrier to clinical adoption. Keywords: Technology Assessment, Quantification. © RSNA, 2021
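The interreader agreement statistic reported above, the intraclass correlation coefficient, can be sketched in its ICC(2,1) form (two-way random effects, absolute agreement, single measures) from the standard ANOVA mean squares; the abstract does not specify which ICC variant was used, and the scores below are hypothetical:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    ratings: (n_subjects, k_raters) array of scores."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * ((Y.mean(1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
    ms_c = n * ((Y.mean(0) - grand) ** 2).sum() / (k - 1)   # between-rater MS
    resid = Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))          # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical air trapping severity scores: 6 cases x 5 readers
scores = np.array([[2, 2, 3, 2, 2],
                   [4, 4, 4, 5, 4],
                   [1, 1, 1, 1, 2],
                   [3, 3, 2, 3, 3],
                   [5, 5, 5, 4, 5],
                   [2, 3, 2, 2, 2]])
print(round(icc2_1(scores), 3))
```

The "absolute agreement" form is the natural choice for a study like this one, because it penalizes readers who rank cases consistently but score severity on systematically different baselines.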